Emotion Expression Functions in Multimodal Presentation
Authors
Abstract
With the increase of multimedia content on the WWW, multimodal presentations using interactive lifelike agents have become an attractive way to deliver information. However, writing multimodal presentations is not easy for many people, because describing the various behaviors of character agents requires a particular character system with its own (often low-level) description language. To overcome this complexity and allow many people to write attractive multimodal presentations easily, MPML (Multimodal Presentation Markup Language) has been developed as a medium-level description language commonly applicable to many character systems. In this paper, we present a new emotion function attached to MPML. With this function, emotion-rich behaviors of character agents can be expressed in MPML. Multimodal presentation content produced in the new version of MPML demonstrates the effectiveness of the new emotion expression function.
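The abstract's idea of layering emotion annotations over agent behavior can be sketched as markup. The following Python snippet builds a toy fragment in that style; every element and attribute name here ("presentation", "agent", "emotion", "speak", "intensity") is a hypothetical illustration of the approach, not the actual MPML tag set.

```python
import xml.etree.ElementTree as ET

def build_presentation():
    # Root of the (hypothetical) presentation script.
    root = ET.Element("presentation")
    # A character agent that performs the scripted behaviors.
    agent = ET.SubElement(root, "agent", {"name": "guide"})
    # An emotion element scopes the agent's behavior, so the same
    # utterance can be rendered with different affect.
    emotion = ET.SubElement(agent, "emotion",
                            {"type": "happy", "intensity": "0.8"})
    speak = ET.SubElement(emotion, "speak")
    speak.text = "Welcome to our multimodal presentation!"
    return ET.tostring(root, encoding="unicode")

print(build_presentation())
```

The point of such a medium-level layer is that the author states *what* emotion to convey, while the mapping to a particular character system's low-level commands (gestures, voice parameters) is left to the MPML converter.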
Similar Papers
Multimodal Presentation Markup Language MPML With Emotion Expression Functions Attached
With the increase of multimedia content on the WWW, multimodal presentation using interactive life-like agents is attractive and becoming important. However, it is not easy for many people to write such multimodal presentations, due to the complexity of describing the various behaviors of character agents and their interactions in a particular character system with individual (often low-level) descr...
A Multimodal Presentation Mark-up Language for Enhanced Affective Presentation
Nowadays, agent systems are flourishing. We can see them presenting news on the Internet or guiding virtual tourists in 3D worlds. All these agents serve as a new, more friendly and natural interface between user and computer. This involves natural voice communication and natural behaviour management. Our interest is to create a language called Multimodal Presentation Mark-up Language...
Multimodal Emotion Recognition Integrating Affective Speech with Facial Expression
In recent years, emotion recognition has attracted extensive interest in signal processing, artificial intelligence and pattern recognition due to its potential applications in human-computer interaction (HCI). Most previously published work in the field performs emotion recognition using either affective speech or facial expression alone. However, affective spe...
Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition
This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion sta...
Simulating the Emotion Dynamics of a Multimodal Conversational Agent
We describe an implemented system for the simulation and visualisation of the emotional state of a multimodal conversational agent called Max. The focus of the presented work lies on modeling a coherent course of emotions over time. The basic idea of the underlying emotion system is the linkage of two interrelated psychological concepts: an emotion axis – representing short-time system states –...